MAIRE - A Model-Agnostic Interpretable Rule Extraction Procedure for Explaining Classifiers

Authors

Abstract

The paper introduces a novel framework for extracting model-agnostic, human-interpretable rules to explain a classifier's output. A rule is defined as an axis-aligned hyper-cuboid containing the instance whose classification decision is to be explained. The proposed procedure finds the largest such hyper-cuboid (high coverage) such that a high percentage of the instances inside it have the same class label as the instance being explained (high precision). Novel approximations of the coverage and precision measures, expressed in terms of the hyper-cuboid's parameters, are defined and maximized using gradient-based optimizers. The quality of these approximations is rigorously analyzed both theoretically and experimentally. Heuristics for simplifying the generated explanations to achieve better interpretability, and a greedy selection algorithm that combines local explanations into a global model covering a large part of the instance space, are also proposed. The procedure is model agnostic, can be applied to any arbitrary classifier, and supports all types of attributes (including continuous, ordered, and unordered discrete). Its wide-scale applicability is validated on a variety of synthetic and real-world datasets from different domains (tabular, text, image).
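The central objects in this procedure are an axis-aligned hyper-cuboid rule and its coverage and precision over sampled instances. The sketch below is a minimal illustration, not the authors' implementation: the names `Cuboid`, `coverage`, and `precision` are assumed for exposition, and it computes the exact (non-differentiable) measures, whereas the paper optimizes smooth approximations of them with gradient-based optimizers.

```python
# Minimal sketch (illustrative only, not the paper's code) of an
# axis-aligned hyper-cuboid rule and its exact coverage/precision.
import numpy as np

class Cuboid:
    """Axis-aligned hyper-cuboid given by per-feature lower/upper bounds."""
    def __init__(self, lower, upper):
        self.lower = np.asarray(lower, dtype=float)
        self.upper = np.asarray(upper, dtype=float)

    def contains(self, X):
        # Boolean mask: which instances fall inside the cuboid.
        X = np.asarray(X, dtype=float)
        return np.all((X >= self.lower) & (X <= self.upper), axis=1)

def coverage(cuboid, X):
    """Fraction of sampled instances covered by the rule."""
    return cuboid.contains(X).mean()

def precision(cuboid, X, classifier, target_label):
    """Fraction of covered instances that the black-box classifier
    assigns the same label as the instance being explained."""
    inside = cuboid.contains(X)
    if not inside.any():
        return 0.0
    preds = classifier(X[inside])
    return float(np.mean(preds == target_label))

if __name__ == "__main__":
    # Toy black-box classifier on 2-D data (assumed for the example).
    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(1000, 2))
    black_box = lambda Z: (Z[:, 0] + Z[:, 1] > 1.0).astype(int)

    x_explained = np.array([0.8, 0.7])                  # instance to explain
    rule = Cuboid(lower=[0.6, 0.5], upper=[1.0, 1.0])   # candidate rule

    label = black_box(x_explained[None])[0]
    print("covers instance:", bool(rule.contains(x_explained[None])[0]))
    print("coverage: %.3f" % coverage(rule, X))
    print("precision: %.3f" % precision(rule, X, black_box, label))
```

In the actual procedure, the indicator functions behind these two measures are replaced by differentiable surrogates so that the cuboid's lower and upper bounds can be tuned by gradient ascent.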

Related resources

MAGIX: Model Agnostic Globally Interpretable Explanations

Explaining the behavior of a black box machine learning model at the instance level is useful for building trust. However, what is also important is understanding how the model behaves globally. Such an understanding provides insight into both the data on which the model was trained and the generalization power of the rules it learned. We present here an approach that learns rules to explain gl...

Modeling Interpretable Fuzzy Rule-Based Classifiers for Medical Decision Support

Decision support systems in Medicine must be easily comprehensible, both for physicians and patients. In this chapter, the authors describe how the fuzzy modeling methodology called HILK (Highly Interpretable Linguistic Knowledge) can be applied for building highly interpretable fuzzy rule-based classifiers (FRBCs) able to provide medical decision support. As a proof of concept, they describe t...

Local Interpretable Model-Agnostic Explanations for Music Content Analysis

The interpretability of a machine learning model is essential for gaining insight into model behaviour. While some machine learning models (e.g., decision trees) are transparent, the majority of models used today are still black-boxes. Recent work in machine learning aims to analyse these models by explaining the basis of their decisions. In this work, we extend one such technique, called local...

Crisp Rule Extraction from Perceptron Network Classifiers

A method of extracting intuitive knowledge from neural network classifiers is presented in the paper. An algorithm which obtains crisp rules in the form of logical implications which approximately describe the neural network mapping is introduced. The number of extracted rules can be selected using an uncertainty margin parameter as well as by changing the precision level of the soft quantizati...

A Combination-of-Tools Method for Learning Interpretable Fuzzy Rule-Based Classifiers from Support Vector Machines

A new approach is proposed for the data-based identification of transparent fuzzy rule-based classifiers. It is observed that fuzzy rule-based classifiers work in a similar manner to kernel function-based support vector machines (SVMs), since both nonlinearly map the input space into a feature space where the decision can be easily made. Accordingly, a trained SVM can be used for the con...


Journal

Journal title: Lecture Notes in Computer Science

Year: 2021

ISSN: 1611-3349, 0302-9743

DOI: https://doi.org/10.1007/978-3-030-84060-0_21